
    SWAPHI: Smith-Waterman Protein Database Search on Xeon Phi Coprocessors

    The maximal sensitivity of the Smith-Waterman (SW) algorithm has enabled its wide use in biological sequence database search. Unfortunately, this high sensitivity comes at the expense of quadratic time complexity, which makes the algorithm computationally demanding for big databases. In this paper, we present SWAPHI, the first parallelized algorithm employing Xeon Phi coprocessors to accelerate SW protein database search. SWAPHI is designed around a scale-and-vectorize approach, i.e. it boosts alignment speed by effectively utilizing both the coarse-grained parallelism across the many co-processing cores (scale) and the fine-grained parallelism from the 512-bit wide single instruction, multiple data (SIMD) vectors within each core (vectorize). Searching against the large UniProtKB/TrEMBL protein database, SWAPHI achieves a performance of up to 58.8 billion cell updates per second (GCUPS) on one coprocessor and up to 228.4 GCUPS on four coprocessors. Furthermore, it demonstrates good parallel scalability across varying numbers of coprocessors, and with four coprocessors it outperforms both SWIPE on 16 high-end CPU cores and BLAST+ on 8 cores, with maximum speedups of 1.52 and 1.86, respectively. SWAPHI is written in C++ (with a set of SIMD intrinsics) and is freely available at http://swaphi.sourceforge.net. Comment: A short version of this paper has been accepted by the IEEE ASAP 2014 conference.
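    For context, both the quadratic complexity and the GCUPS figures quoted above trace back to the Smith-Waterman score recurrence, sketched here in generic notation (linear gap penalty g for brevity; protein search tools such as SWAPHI typically use affine gap penalties, and the symbols below are illustrative rather than taken from the paper):

        H_{i,j} = \max\{\, 0,\; H_{i-1,j-1} + s(a_i, b_j),\; H_{i-1,j} - g,\; H_{i,j-1} - g \,\}, \qquad H_{i,0} = H_{0,j} = 0

        \text{GCUPS} = \frac{m \cdot n}{t \cdot 10^{9}}

    Every one of the m x n cells (query length m, total database residues n) must be filled, which is the quadratic cost; dividing the number of cell updates by the search time t in seconds and by 10^9 gives the GCUPS metric.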

    Analysis of Business Performance Indicators to Support Management Decisions (Analyse betriebswirtschaftlicher Kennzahlen zur Unterstützung von Managemententscheidungen)

    Decision making is one of the essential tasks of every manager, and the risk of wrong decisions is obvious. To avoid them, it is crucial to analyse the information relevant to the decision process thoroughly, especially when the decision concerns the selection of a new business partner. The goal of this paper is therefore to show how a meaningful picture of the financial situation of potential business partners can be obtained through a sound cross-sectional analysis of freely accessible business data. This approach can be applied in any decision situation in which the financial position of the companies under consideration is relevant.

    GPU-accelerated exhaustive search for third-order epistatic interactions in case–control studies

    This is a post-peer-review, pre-copyedit version of an article published in Journal of Computational Science. The final authenticated version is available online at: https://doi.org/10.1016/j.jocs.2015.04.001 [Abstract] Interest in discovering combinations of genetic markers from case–control studies, such as Genome Wide Association Studies (GWAS), that are strongly associated with diseases has increased in recent years. Detecting epistasis, i.e. interactions among k markers (k ≥ 2), is an important but time-consuming operation, since statistical computations have to be performed for each k-tuple of measured markers. Efficient exhaustive methods have been proposed for k = 2, but exhaustive third-order analyses are thought to be impractical due to the cubic number of triples to be computed. Thus, most previous approaches apply heuristics to accelerate the analysis by discarding certain triples in advance; unfortunately, these tools can fail to detect interesting interactions. We present GPU3SNP, a fast GPU-accelerated tool that exhaustively searches for interactions among all marker triples of a given case–control dataset. Our tool is able to analyze an input dataset with tens of thousands of markers in reasonable time thanks to two efficient CUDA kernels and efficient workload distribution techniques. For instance, a dataset consisting of 50,000 markers measured from 1,000 individuals can be analyzed in less than 22 hours on a single compute node with 4 NVIDIA GTX Titan boards. Source code is available at: http://sourceforge.net/projects/gpu3snp/
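    To make the "cubic number of triples" concrete, an exhaustive third-order scan of the 50,000-marker example above has to evaluate

        \binom{50000}{3} = \frac{50000 \times 49999 \times 49998}{6} \approx 2.08 \times 10^{13}

    candidate triples; finishing in under 22 hours on four GPUs therefore corresponds to a sustained aggregate rate of at least roughly 2.6 x 10^8 triple tests per second.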

    ParDRe: faster parallel duplicated reads removal tool for sequencing studies

    This is a pre-copyedited, author-produced version of an article accepted for publication in Bioinformatics following peer review. The version of record [insert complete citation information here] is available online at: https://doi.org/10.1093/bioinformatics/btw038 [Abstract] Summary: Current next-generation sequencing technologies often generate duplicated or near-duplicated reads that (depending on the application scenario) do not provide any interesting biological information but can increase the memory requirements and computational time of downstream analysis. In this work we present ParDRe, a de novo parallel tool to remove duplicated and near-duplicated reads through the clustering of single-end or paired-end sequences from FASTA or FASTQ files. It uses a novel bitwise approach to compare the suffixes of DNA strings and employs hybrid MPI/multithreading to reduce runtime on multicore systems. We show that ParDRe is up to 27.29 times faster than Fulcrum (a representative state-of-the-art tool) on a platform with two 8-core Sandy Bridge processors. Availability and implementation: Source code in C++ and MPI running on Linux systems, as well as a reference manual, are available at https://sourceforge.net/projects/pardre
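    The bitwise idea can be illustrated with a minimal C++ sketch (a generic illustration, not ParDRe's actual code or encoding): packing bases two bits each into 64-bit words lets 32 bases be compared with a single integer XOR.

        #include <algorithm>
        #include <cstdint>
        #include <string>

        // Hypothetical helper: pack up to 32 bases (2 bits each) into one 64-bit word.
        uint64_t pack32(const std::string& seq, std::size_t offset, std::size_t len) {
            uint64_t word = 0;
            for (std::size_t i = 0; i < len && i < 32; ++i) {
                uint64_t code = 0;                       // 'A' and unknown bases -> 00
                switch (seq[offset + i]) {
                    case 'C': code = 1; break;           // 01
                    case 'G': code = 2; break;           // 10
                    case 'T': code = 3; break;           // 11
                }
                word = (word << 2) | code;
            }
            return word;
        }

        // Two equal-length fragments match iff every packed word XORs to zero.
        bool same_fragment(const std::string& a, const std::string& b, std::size_t len) {
            for (std::size_t off = 0; off < len; off += 32) {
                std::size_t chunk = std::min<std::size_t>(32, len - off);
                if ((pack32(a, off, chunk) ^ pack32(b, off, chunk)) != 0) return false;
            }
            return true;
        }

    A real tool additionally has to tolerate a configurable number of mismatches for near-duplicates, which a plain XOR comparison does not capture; the sketch only shows the exact-match core.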

    Economic Success through Ecological Action: the FirmenUmweltIndex FUX for Sustainable Business (Ökonomischer Erfolg durch ökologisches Handeln: der FirmenUmweltIndex FUX für nachhaltiges Wirtschaften)

    This paper is a result of the student research and working group sO2lutions at the Technical University of Applied Sciences Wildau. It examines how the environmentally friendly behaviour of managers and employees in a company can be influenced and encouraged. To this end, it first discusses the cornerstones of sustainability, namely economy, ecology and ethics, and their interplay in a corporate management oriented towards the real economy rather than purely monetary goals. It then presents and discusses the FirmenUmweltIndex FUX (Company Environment Index), developed by sO2lutions, as a contribution to realizing environmentally sound, real-economy-oriented corporate management. This person-centred approach can be applied in any organizational unit, for example in housing companies or hospitals, to put sustainability into practice.

    Parallel and Scalable Short-Read Alignment on Multi-Core Clusters Using UPC++

    [Abstract]: The growth of next-generation sequencing (NGS) datasets poses a challenge to the alignment of reads to reference genomes in terms of both alignment quality and execution speed. Some available aligners have been shown to obtain high-quality mappings at the expense of long execution times. Finding fast yet accurate software solutions is of high importance to research, since the availability and size of NGS datasets continue to increase. In this work we present an efficient parallelization approach for NGS short-read alignment on multi-core clusters. Our approach takes advantage of a distributed shared memory programming model based on the new UPC++ language. Experimental results using the CUSHAW3 aligner show that our implementation, based on dynamic scheduling, obtains good scalability on multi-core clusters. In our evaluation, we are able to complete the single-end and paired-end alignments of 246 million reads of length 150 base pairs in 11.54 and 16.64 minutes, respectively, using 32 nodes with four AMD Opteron 6272 16-core CPUs per node. In contrast, the original multi-threaded tool needs 2.77 and 5.54 hours to perform the same alignments on the 64 cores of one node. The source code of our parallel implementation is publicly available at the CUSHAW3 homepage (http://cushaw3.sourceforge.net)
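    The dynamic scheduling idea can be sketched in plain threaded C++ (a shared-memory analogue only; the actual tool distributes blocks across cluster nodes with UPC++, and align_block is a hypothetical stand-in for a call into the aligner): each worker repeatedly claims the next unprocessed block of reads from a shared counter, so faster workers automatically take on more blocks.

        #include <algorithm>
        #include <atomic>
        #include <cstddef>
        #include <thread>
        #include <vector>

        // Hypothetical per-block work; a real worker would invoke the aligner here.
        void align_block(std::size_t first_read, std::size_t count) {
            (void)first_read; (void)count;
        }

        void run_workers(std::size_t total_reads, std::size_t block_size, unsigned num_workers) {
            std::atomic<std::size_t> next_block{0};
            const std::size_t num_blocks = (total_reads + block_size - 1) / block_size;

            std::vector<std::thread> workers;
            for (unsigned w = 0; w < num_workers; ++w) {
                workers.emplace_back([&] {
                    for (;;) {
                        std::size_t b = next_block.fetch_add(1);   // claim the next block
                        if (b >= num_blocks) break;                // no work left
                        std::size_t first = b * block_size;
                        align_block(first, std::min(block_size, total_reads - first));
                    }
                });
            }
            for (auto& t : workers) t.join();
        }

    Compared with a static split of the reads, this kind of work stealing from a shared counter keeps all cores busy even when some blocks take much longer to align than others.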

    Parallelized short read assembly of large genomes using de Bruijn graphs

    BACKGROUND: Next-generation sequencing technologies have given rise to an explosive increase in DNA sequencing throughput and have driven the recent development of de novo short-read assemblers. However, existing assemblers require long execution times and large amounts of compute resources to assemble large genomes from the huge quantities of short reads produced. RESULTS: We present PASHA, a parallelized short-read assembler using de Bruijn graphs, which takes advantage of hybrid computing architectures consisting of both shared-memory multi-core CPUs and distributed-memory compute clusters to gain efficiency and scalability. Evaluation using three small-scale real paired-end datasets shows that PASHA is able to produce more contiguous high-quality assemblies in shorter time than three leading assemblers: Velvet, ABySS and SOAPdenovo. PASHA's scalability for large genome datasets is demonstrated with a human genome assembly. Compared to ABySS, PASHA achieves competitive assembly quality with faster execution on the same compute resources, yielding an NG50 contig size of 503, a longest correct contig size of 18,252, and an NG50 scaffold size of 2,294. Moreover, the human assembly is completed in about 21 hours with only modest compute resources. CONCLUSIONS: Developing parallel assemblers for large genomes has attracted significant research effort due to the explosive growth in size of high-throughput short-read datasets. By employing hybrid parallelism consisting of multi-threading on multi-core CPUs and message passing on compute clusters, PASHA is able to assemble the human genome with high quality and in reasonable time using modest compute resources
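    As a reminder of the underlying data structure, a de Bruijn assembler turns every k-mer of every read into an edge between its (k-1)-mer prefix and suffix nodes; contigs then correspond to unambiguous paths through this graph. A minimal, generic C++ sketch of the construction step (not PASHA's distributed representation) is:

        #include <cstddef>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Adjacency list: (k-1)-mer node -> successor (k-1)-mer nodes, one entry per k-mer seen.
        using DeBruijnGraph = std::unordered_map<std::string, std::vector<std::string>>;

        DeBruijnGraph build_de_bruijn(const std::vector<std::string>& reads, std::size_t k) {
            DeBruijnGraph adj;
            for (const std::string& read : reads) {
                if (read.size() < k) continue;                   // read too short to contain a k-mer
                for (std::size_t i = 0; i + k <= read.size(); ++i) {
                    std::string kmer = read.substr(i, k);
                    adj[kmer.substr(0, k - 1)].push_back(kmer.substr(1, k - 1));  // edge prefix -> suffix
                }
            }
            return adj;
        }

    A production assembler additionally stores coverage counts, handles reverse complements and sequencing errors, and partitions the graph across processes; the sketch only shows the core construction idea.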

    CUDASW++: optimizing Smith-Waterman sequence database searches for CUDA-enabled graphics processing units

    Background: The Smith-Waterman algorithm is one of the most widely used tools for searching biological sequence databases due to its high sensitivity. Unfortunately, it is computationally demanding, which is further compounded by the exponential growth of sequence databases. The recent emergence of many-core architectures, and their associated programming interfaces, provides an opportunity to accelerate sequence database searches using commonly available and inexpensive hardware. Findings: Our CUDASW++ implementation (benchmarked on a single-GPU NVIDIA GeForce GTX 280 graphics card and a dual-GPU GeForce GTX 295 graphics card) provides a significant performance improvement over other publicly available implementations such as SWPS3, CBESW, SW-CUDA and NCBI-BLAST. CUDASW++ supports query sequences of up to 59K residues. For query sequences ranging in length from 144 to 5,478, searched against Swiss-Prot release 56.6, the single-GPU version achieves an average performance of 9.509 GCUPS (minimum 9.039, maximum 9.660 GCUPS) and the dual-GPU version achieves an average performance of 14.484 GCUPS (minimum 10.660, maximum 16.087 GCUPS). Conclusion: CUDASW++ is publicly available open-source software. It provides a significant performance improvement for Smith-Waterman-based protein sequence database searches by fully exploiting the compute capability of commonly used, CUDA-enabled low-cost GPUs.
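    The computation being accelerated is the same cell-update recurrence sketched for SWAPHI above; a minimal scalar C++ baseline (linear gap penalty, score only, illustrative rather than CUDASW++ code) looks like this:

        #include <algorithm>
        #include <string>
        #include <vector>

        // O(m*n) Smith-Waterman local alignment score with a linear gap penalty.
        int smith_waterman_score(const std::string& a, const std::string& b,
                                 int match = 2, int mismatch = -1, int gap = 1) {
            std::vector<int> prev(b.size() + 1, 0), curr(b.size() + 1, 0);
            int best = 0;
            for (std::size_t i = 1; i <= a.size(); ++i) {
                for (std::size_t j = 1; j <= b.size(); ++j) {
                    int s = (a[i - 1] == b[j - 1]) ? match : mismatch;
                    curr[j] = std::max({0,                    // local alignment may restart anywhere
                                        prev[j - 1] + s,      // diagonal: match or mismatch
                                        prev[j] - gap,        // gap in b
                                        curr[j - 1] - gap});  // gap in a
                    best = std::max(best, curr[j]);
                }
                std::swap(prev, curr);
            }
            return best;
        }

    GPU implementations keep this recurrence but evaluate many database sequences, and many cells per sequence, in parallel, which is where the GCUPS figures above come from.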